
    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets, in both theory and practice, use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms, or by kernelizing recursively in the branch-and-reduce paradigm. It is critical for these algorithms that kernelization is fast and returns a small kernel. Current algorithms either are slow but produce a small kernel, or are fast but give a large kernel. We attempt to accomplish both goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two further accelerations: dependency checking, which prunes reductions that cannot be applied, and reduction tracking, which allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than those of the fastest kernelization methods, with a similar execution time. Furthermore, it computes kernels comparable in size to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances. Comment: Extended version.
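
    As a concrete illustration of kernelization by reduction rules, the sketch below applies two classic rules for maximum independent set (the isolated-vertex rule and the degree-1/pendant rule) until a fixed point. It is a minimal sequential stand-in, not the paper's parallel algorithm, whose rule set is far richer (including bipartite-matching-based reductions); the function and graph names are ours.

        def _delete(adj, v):
            """Remove vertex v and all incident edges from the adjacency map."""
            for u in adj.pop(v):
                adj[u].discard(v)

        def kernelize(adj):
            """adj: dict vertex -> set of neighbours (undirected, no self-loops).
            Returns (forced, adj): vertices provably in some maximum independent
            set, plus the remaining kernel graph (adj is modified in place)."""
            forced = set()
            changed = True
            while changed:
                changed = False
                for v in list(adj):
                    if v not in adj:
                        continue                    # deleted earlier in this pass
                    if len(adj[v]) == 0:            # isolated-vertex rule
                        forced.add(v)
                        _delete(adj, v)
                        changed = True
                    elif len(adj[v]) == 1:          # pendant rule: take v, drop its neighbour
                        u = next(iter(adj[v]))
                        forced.add(v)
                        _delete(adj, u)
                        _delete(adj, v)
                        changed = True
            return forced, adj

        # Example: the path 0-1-2-3 kernelizes away completely, forcing {0, 2}.
        g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        print(kernelize(g))                         # ({0, 2}, {})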

    Convergence Properties of Fast quasi-LPV Model Predictive Control

    In this paper, we study the convergence properties of an iterative algorithm for fast nonlinear model predictive control of quasi-linear parameter-varying systems without inequality constraints. Compared to previous works on this algorithm, we contribute conditions under which the iterations are guaranteed to converge. Furthermore, we show that the algorithm converges to suboptimal solutions and propose an optimality-preserving variant with moderately increased computational complexity. Finally, we compare both variants, in terms of solution quality and computational performance, with a state-of-the-art solver for nonlinear model predictive control in two simulation benchmarks. Comment: 6 pages, 2 figures. Corrects a mistake in Lemma 1 compared to the conference version; the changes are highlighted in blue.
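
    A common way to realize an iterative quasi-LPV scheme of this kind is to freeze the scheduling trajectory, solve the resulting unconstrained time-varying linear-quadratic problem exactly, re-evaluate the scheduling parameter along the new prediction, and repeat until the input trajectory stops changing. The sketch below follows that pattern under illustrative assumptions (single input, quadratic cost with weights Q and R, Riccati-based solve); it is not the paper's exact algorithm, and all model and parameter names are ours.

        import numpy as np

        def qlpv_mpc_step(x0, A_of, B_of, rho_of, N=20, Q=None, R=None,
                          iters=50, tol=1e-8):
            """One MPC solve: iterate frozen-scheduling LQ problems to a fixed point."""
            nx, nu = x0.size, 1                     # single input for brevity
            Q = np.eye(nx) if Q is None else Q
            R = np.eye(nu) if R is None else R
            u = np.zeros((N, nu))                   # initial input guess
            for _ in range(iters):
                # Simulate with the current inputs to fix the scheduling trajectory.
                xs = [x0]
                for k in range(N):
                    rho = rho_of(xs[k], u[k])
                    xs.append(A_of(rho) @ xs[k] + B_of(rho) @ u[k])
                # Backward Riccati sweep for the frozen time-varying system.
                P, Ks = Q, []
                for k in reversed(range(N)):
                    rho = rho_of(xs[k], u[k])
                    A, B = A_of(rho), B_of(rho)
                    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                    P = Q + A.T @ P @ (A - B @ K)
                    Ks.append(K)
                Ks.reverse()
                # Forward pass with the new time-varying feedback.
                u_new, x = np.zeros_like(u), x0
                for k in range(N):
                    u_new[k] = -Ks[k] @ x
                    rho = rho_of(x, u_new[k])
                    x = A_of(rho) @ x + B_of(rho) @ u_new[k]
                if np.linalg.norm(u_new - u) < tol: # fixed point: iteration converged
                    break
                u = u_new
            return u_new

        # Illustrative quasi-LPV model with state-dependent scheduling.
        A_of = lambda r: np.array([[1.0, 0.1], [0.1 * r, 0.9]])
        B_of = lambda r: np.array([[0.0], [0.1]])
        rho_of = lambda x, u: np.tanh(x[0])
        print(qlpv_mpc_step(np.array([1.0, 0.0]), A_of, B_of, rho_of)[0])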

    Targeted Branching for the Maximum Independent Set Problem

    Finding a maximum independent set is a fundamental NP-hard problem with many real-world applications. Given an unweighted graph, the problem asks for a maximum-cardinality set of pairwise non-adjacent vertices. In recent years, some of the most successful algorithms for solving this problem have been based on the branch-and-bound or branch-and-reduce paradigms. In particular, branch-and-reduce algorithms, which combine branch-and-bound with reduction rules, have achieved substantial results, solving many previously infeasible real-world instances. These results were in large part achieved by developing new, more practical reduction rules. However, other components that have a significant impact on the performance of these algorithms have not received as much attention. One of these is the branching strategy, which determines which vertex is included in or excluded from a potential solution. Even now, the most commonly used strategy selects vertices solely by degree, ignoring other factors that affect performance. In this work, we develop and evaluate several novel branching strategies for both branch-and-bound and branch-and-reduce algorithms. Each strategy follows one of two approaches motivated by existing research: it either (1) aims to decompose the graph into two or more connected components that can then be solved independently, or (2) tries to remove vertices that hinder the application of a reduction rule, which can lead to smaller graphs. Our experimental evaluation on a large set of real-world instances indicates that our strategies improve the performance of the state-of-the-art branch-and-reduce algorithm by Akiba and Iwata. More specifically, our reduction-based packing branching rule outperforms the default strategy of selecting a vertex of highest degree on 65% of all instances tested. Furthermore, our decomposition-based strategy based on edge cuts achieves a speedup of 2.29 on sparse networks (1.22 on all instances).
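
    To make approach (1) concrete, the sketch below contrasts the default max-degree branching rule with a decomposition-motivated one that prefers articulation points, i.e. vertices whose removal splits the graph into components that can be solved independently. This is only a cheap proxy for the paper's edge-cut-based strategy; the use of networkx and the function names are our assumptions.

        import networkx as nx

        def branch_vertex_max_degree(G):
            """Default rule: branch on a vertex of highest degree."""
            return max(G.nodes, key=G.degree)

        def branch_vertex_decompose(G):
            """Prefer articulation points (ties broken by degree), so both
            branches tend to split into independently solvable components;
            fall back to the degree rule when the graph is biconnected."""
            cut_vertices = list(nx.articulation_points(G))
            if cut_vertices:
                return max(cut_vertices, key=G.degree)
            return branch_vertex_max_degree(G)

        # Two triangles sharing vertex 2: branching on 2 decomposes the graph.
        G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
        print(branch_vertex_decompose(G))           # 2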

    Robust Performance Analysis of Cooperative Control Dynamics via Integral Quadratic Constraints

    We study cooperative control dynamics with gradient-based forcing terms. As a specific example, we focus on source-seeking dynamics with vehicles embedded in an unknown, strongly convex scalar field, where a subset of agents has gradient-based forcing terms, and we consider formation control dynamics (with convex interactions) and flocking dynamics (with non-convex interactions) as possible interaction mechanisms. We leverage the framework of α-integral quadratic constraints (α-IQCs) to obtain convergence rate estimates whenever exponential stability can be achieved. The communication graph and the flocking interaction potential are assumed time-invariant and uncertain. Sufficient conditions take the form of linear matrix inequalities whose size is independent of the size of the network. A purely time-domain derivation of the so-called hard Zames-Falb α-IQCs involving general non-causal higher-order multipliers is given, along with a parameterization of the multipliers suitably adapted to the α-IQC setting. The time-domain arguments facilitate a straightforward extension to linear parameter-varying systems. Numerous examples illustrate the application of the theoretical results. Comment: arXiv admin note: substantial text overlap with arXiv:2110.06369. Author's note: added a main contributions list; minor changes to title, abstract, and introduction; added a better example in Fig. 8; all other results unchanged.
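
    Roughly speaking, for a plain LTI system and trivial multipliers, α-IQC-type conditions reduce to the familiar Lyapunov LMI certifying an exponential decay rate α: find P ≻ 0 with AᵀP + PA + 2αP ⪯ 0. The sketch below checks that much-simplified condition with cvxpy as a feasibility problem; it omits the Zames-Falb multipliers and network structure the paper actually handles, and the example matrix is ours.

        import cvxpy as cp
        import numpy as np

        def certify_rate(A, alpha):
            """Feasibility of P > 0 with A'P + P A + 2*alpha*P <= 0."""
            n = A.shape[0]
            P = cp.Variable((n, n), symmetric=True)
            cons = [P >> np.eye(n),                         # P positive definite
                    A.T @ P + P @ A + 2 * alpha * P << 0]   # decay rate alpha
            prob = cp.Problem(cp.Minimize(0), cons)
            prob.solve(solver=cp.SCS)
            return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1 and -2
        print(certify_rate(A, 0.9))                 # True: below the slowest mode
        print(certify_rate(A, 1.5))                 # False: exceeds the slowest mode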

    A Decomposition Approach to Multi-Agent Systems with Bernoulli Packet Loss

    In this paper, we extend the decomposable systems framework to multi-agent systems subject to Bernoulli-distributed packet loss with uniform probability. The proposed sufficient analysis conditions for mean-square stability and H₂ performance, expressed as linear matrix inequalities, scale linearly with the network size and thus allow even very large-scale multi-agent systems to be analysed. A numerical example demonstrates the potential of the approach through application to a first-order consensus problem. Comment: 11 pages, 4 figures.
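
    The modelled dynamics can be illustrated with a short Monte Carlo experiment: a first-order consensus iteration over a complete graph in which each undirected link independently fails with probability p at every step. The sketch below only shows mean-square contraction of the disagreement empirically; it does not reproduce the paper's LMI analysis, and all parameters are illustrative assumptions.

        import numpy as np

        def simulate(n=8, p=0.3, eps=0.05, steps=100, trials=200, seed=0):
            """Mean-square disagreement of consensus under Bernoulli link loss."""
            rng = np.random.default_rng(seed)
            edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
            disagreement = np.zeros(steps)
            for _ in range(trials):
                x = rng.standard_normal(n)
                for k in range(steps):
                    dx = np.zeros(n)
                    for (i, j) in edges:
                        if rng.random() > p:        # link delivered this step
                            dx[i] += eps * (x[j] - x[i])
                            dx[j] += eps * (x[i] - x[j])
                    x = x + dx
                    disagreement[k] += np.var(x)    # variance around the mean
            return disagreement / trials

        ms = simulate()
        print(ms[0], ms[-1])                        # decays toward consensus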

    Synopsis of an integrated guidance for enhancing the care of familial hypercholesterolaemia: an Australian perspective

    Summary: Introduction: Familial hypercholesterolaemia (FH) is a common, heritable and preventable cause of premature coronary artery disease, with significant potential for positive impact on public health and healthcare savings. New clinical practice recommendations are presented in an abridged guidance to assist practitioners in enhancing the care of all patients with FH. Main recommendations: Core recommendations are made on the detection, diagnosis, assessment and management of adults, children and adolescents with FH. There is a key role for general practitioners (GPs) working in collaboration with specialists with expertise in lipidology. Advice is given on genetic and cholesterol testing and on risk notification of biological relatives undergoing cascade testing for FH; all healthcare professionals should develop skills in genomic medicine. Management is underpinned by the precepts of risk stratification, adherence to healthy lifestyles, treatment of non-cholesterol risk factors, and appropriate use of low-density lipoprotein (LDL)-cholesterol-lowering therapies, including statins, ezetimibe and proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors. Recommendations on service design are provided in the full guidance. Potential impact on care of FH: These recommendations should be applied with judicious clinical judgement and shared decision making with patients and families. Models of care need to be adapted to local and regional needs and resources. In Australia, new government-funded schemes for genetic testing and the use of PCSK9 inhibitors, as well as the National Health Genomics Policy Framework, will enable adoption of these recommendations. A broad implementation science strategy is, however, required to ensure that the guidance translates into benefit for all families with FH.